The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice, as well as the bottlenecks faced by the community, in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. Only 37% of the participants performed k-fold cross-validation on the training set, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
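Patch-based training was the most common answer to over-large samples. As a minimal, self-contained sketch (NumPy; the volume size, patch size, and sampling scheme are illustrative choices only, not taken from any surveyed solution), random fixed-size patches can be drawn from a 3D volume that would not fit through a network whole:

```python
import numpy as np

def extract_random_patches(volume, patch_size=(64, 64, 64), n_patches=8, rng=None):
    """Sample random 3D patches from a volume too large to process at once."""
    rng = rng or np.random.default_rng()
    patches = []
    for _ in range(n_patches):
        # pick a random top-left-front corner so the patch fits inside the volume
        start = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
        sl = tuple(slice(st, st + p) for st, p in zip(start, patch_size))
        patches.append(volume[sl])
    return np.stack(patches)

# Example: a synthetic 256^3 volume, trained on 64^3 patches.
vol = np.random.rand(256, 256, 256).astype(np.float32)
batch = extract_random_patches(vol, n_patches=4)
print(batch.shape)  # (4, 64, 64, 64)
```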
Photometric stereo recovers the surface normals of an object from multiple images with varying shading cues, i.e., by modeling the relationship between surface orientation and intensity at each pixel. Photometric stereo excels in per-pixel resolution and fine reconstruction detail. However, it is a complicated problem because of the non-linear relationship caused by non-Lambertian surface reflectance. Recently, various deep learning methods have shown powerful capabilities for photometric stereo on non-Lambertian surfaces. This paper provides a comprehensive review of existing deep learning-based calibrated photometric stereo methods. We first analyze these methods from different perspectives, including input processing, supervision, and network architecture. We then summarize the performance of deep learning photometric stereo models on the most widely used benchmark dataset, demonstrating the advanced performance of deep learning-based photometric stereo methods. Finally, we give suggestions and propose future research trends based on the limitations of existing models.
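For context, the classic calibrated baseline these deep methods improve on is the Lambertian least-squares solve, which recovers the per-pixel normal (scaled by albedo) from m >= 3 images under known lighting. The sketch below is the standard textbook formulation, not any specific paper's method:

```python
import numpy as np

def lambertian_normals(intensities, light_dirs):
    """
    Least-squares photometric stereo under the Lambertian model
    I = albedo * (L @ n): recover per-pixel normals from m >= 3 images.
    intensities: (m, h, w) observed intensities
    light_dirs:  (m, 3) calibrated unit lighting directions
    """
    m, h, w = intensities.shape
    I = intensities.reshape(m, -1)                          # (m, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # (3, h*w), G = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.clip(albedo, 1e-8, None)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic sanity check: a flat surface lit from three directions.
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.0, 0.0, 1.0])
imgs = (L @ n_true).reshape(3, 1, 1) * np.ones((3, 4, 4))
n_est, rho = lambertian_normals(imgs, L)
print(n_est[:, 0, 0])  # ~ [0, 0, 1]
```

The non-linear, non-Lambertian effects the reviewed deep methods target are exactly the deviations from this linear model.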
The recurrent structure is a prevalent framework for video super-resolution, modeling the temporal dependency between frames via hidden states. When applied to real-world scenarios with unknown and complex degradations, hidden states tend to contain unpleasant artifacts and propagate them to the restored frames. In this circumstance, our analyses show that such artifacts can be largely alleviated when the hidden state is replaced with a cleaner counterpart. Based on these observations, we propose a Hidden State Attention (HSA) module to mitigate artifacts in real-world video super-resolution. Specifically, we first adopt various cheap filters to produce a hidden state pool; for example, Gaussian blur filters smooth artifacts, while sharpening filters enhance details. To aggregate from this pool a new hidden state that contains fewer artifacts, we devise a Selective Cross Attention (SCA) module, in which the attention between the input features and each hidden state is calculated. Equipped with HSA, our proposed method, namely FastRealVSR, achieves a 2x speedup while obtaining better performance than Real-BasicVSR. Codes will be available at https://github.com/TencentARC/FastRealVSR
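The abstract does not spell out the exact HSA/SCA computation; the sketch below is one plausible reading (PyTorch), where the filter choices, pool size, and global-similarity attention are all assumptions rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

def gaussian_blur(x, k=5, sigma=1.0):
    """Depthwise Gaussian blur: one of the 'cheap filters' for the pool."""
    c = x.shape[1]
    t = torch.arange(k, dtype=x.dtype, device=x.device) - k // 2
    g = torch.exp(-t**2 / (2 * sigma**2))
    g = g / g.sum()
    kern = (g[:, None] * g[None, :]).repeat(c, 1, 1, 1)   # (C, 1, k, k)
    return F.conv2d(x, kern, padding=k // 2, groups=c)

def selective_cross_attention(feat, hidden):
    """
    Sketch of the SCA idea: build a pool of filtered hidden states, score
    each against the input features, and aggregate a cleaner hidden state.
    feat, hidden: (B, C, H, W)
    """
    blur = gaussian_blur(hidden)
    sharp = hidden + (hidden - blur)                      # unsharp-mask sharpening
    pool = torch.stack([hidden, blur, sharp], 1)          # (B, 3, C, H, W)
    q = feat.flatten(1)                                   # (B, C*H*W)
    k = pool.flatten(2)                                   # (B, 3, C*H*W)
    # global similarity between the input features and each pooled state
    attn = torch.softmax((k @ q.unsqueeze(-1)).squeeze(-1) / q.shape[1] ** 0.5, dim=1)
    return (attn.view(-1, 3, 1, 1, 1) * pool).sum(1)      # (B, C, H, W)

feat = torch.randn(2, 8, 16, 16)
hidden = torch.randn(2, 8, 16, 16)
print(selective_cross_attention(feat, hidden).shape)      # torch.Size([2, 8, 16, 16])
```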
Increasing privacy concerns over personal text data have promoted the development of federated learning (FL) in recent years. However, existing studies on applying FL to NLP are not suited to coordinating participants with heterogeneous or private learning objectives. In this study, we further broaden the application scope of FL in NLP by proposing an Assign-Then-Contrast (ATC) framework, which enables clients with heterogeneous NLP tasks to construct an FL course and learn useful knowledge from each other. Specifically, in the Assign training stage, clients first perform local training on unified tasks assigned by the server rather than on their own learning objectives. After that, in the Contrast training stage, clients train on their different local learning objectives and exchange knowledge with other clients who contribute consistent and useful model updates. We conduct extensive experiments on six widely used datasets covering both Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks, and the proposed ATC framework achieves significant improvements compared with various baseline methods. The source code is available at \url{https://github.com/alibaba/FederatedScope/tree/master/federatedscope/nlp/hetero_tasks}.
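The abstract leaves open how "consistent and useful" updates are identified in the Contrast stage; one simple stand-in (an assumption for illustration, not the paper's rule) is to let each client keep the peers whose flattened model updates have positive cosine similarity with its own:

```python
import torch

def flatten(params):
    return torch.cat([p.flatten() for p in params])

def select_contributors(updates, threshold=0.0):
    """
    Sketch of the Contrast-stage idea: a client keeps knowledge only from
    peers whose updates point in a consistent direction, measured here by
    cosine similarity of flattened update vectors.
    updates: list (one entry per client) of lists of update tensors
    """
    vecs = torch.stack([flatten(u) for u in updates])
    vecs = torch.nn.functional.normalize(vecs, dim=1)
    sim = vecs @ vecs.T                                  # pairwise cosine similarity
    return [(sim[i] > threshold).nonzero().flatten().tolist()
            for i in range(len(updates))]                # client i aggregates peers[i]

# Toy example: two clients with aligned updates, one conflicting.
u = [[torch.tensor([1.0, 1.0])], [torch.tensor([0.9, 1.1])], [torch.tensor([-1.0, -1.0])]]
print(select_contributors(u))  # [[0, 1], [0, 1], [2]]
```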
Pre-trained Vision-Language Models (VLMs) such as CLIP have shown impressive generalization capability on downstream vision tasks with appropriate text prompts. Instead of designing prompts manually, Context Optimization (CoOp) has recently been proposed to learn continuous prompts from task-specific training data. Despite the performance improvements on downstream tasks, several studies have reported that CoOp suffers from overfitting in two respects: (i) the test accuracy on base classes first improves and then degrades during training; (ii) the test accuracy on novel classes keeps decreasing. However, none of the existing studies effectively explains or mitigates this overfitting problem. In this paper, we first explore the cause of overfitting by analyzing the gradient flow. Comparative experiments reveal that CoOp favors generalizable features in the early training stage and spurious features in the later stage, leading to non-overfitting and then overfitting, respectively. Given these observations, we propose Subspace Prompt Tuning (SubPT), which projects the gradients in back-propagation onto the low-rank subspace spanned by the early-stage gradient flow eigenvectors throughout the entire training process, successfully eliminating the overfitting problem. Besides, we equip CoOp with a Novel Feature Learner (NFL) to enhance the generalization of the learned prompts to novel categories beyond the training set, without requiring any image training data. Extensive experiments on 11 classification datasets demonstrate that SubPT+NFL consistently boosts the performance of CoOp and outperforms the state-of-the-art approach CoCoOp. Experiments on more challenging downstream vision tasks, including open-vocabulary object detection and zero-shot semantic segmentation, also verify the effectiveness of the proposed method. Codes can be found at https://tinyurl.com/mpe64f89.
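The core SubPT operation, projecting later gradients onto a low-rank basis estimated from early-stage gradients, can be sketched as follows (PyTorch; using the SVD of stacked gradient snapshots and a fixed rank is an illustrative assumption, not the paper's exact recipe):

```python
import torch

def subspace_basis(early_grads, rank=4):
    """
    Build a low-rank basis from early-training gradient snapshots:
    the top right singular vectors span the early gradient flow.
    early_grads: (n_steps, dim)
    """
    _, _, Vh = torch.linalg.svd(early_grads, full_matrices=False)
    return Vh[:rank]                          # (rank, dim), orthonormal rows

def project(grad, basis):
    """Project a gradient onto the subspace spanned by the basis rows."""
    return basis.T @ (basis @ grad)

dim = 16
early = torch.randn(32, dim)                  # stand-in for recorded early gradients
B = subspace_basis(early, rank=4)
g = torch.randn(dim)                          # a later-stage gradient
g_proj = project(g, B)                        # use g_proj in the optimizer step
print(g_proj.shape, torch.allclose(project(g_proj, B), g_proj, atol=1e-5))
```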
Recent breakthroughs in semi-supervised semantic segmentation have been driven by contrastive learning. In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space. However, inaccurate pseudo-labels map the ambiguous representations of pixels to the wrong classes, owing to the limited cognitive ability of the model. In this paper, we define pixel-wise representations from a new perspective, that of probability theory, and propose a Probabilistic Representation Contrastive Learning (PRCL) framework that improves representation quality by taking probability into consideration. By modelling the mapping from pixels to representations probabilistically, via multivariate Gaussian distributions, we can tune the contribution of ambiguous representations to tolerate the risk of inaccurate pseudo-labels. Furthermore, we define prototypes in the form of distributions, which can indicate the confidence of a class, whereas a point prototype cannot. Moreover, we propose to regularize the distribution variance to enhance the reliability of representations. Taking advantage of these benefits, high-quality feature representations can be derived in the latent space, further improving the performance of semantic segmentation. We conduct extensive experiments on Pascal VOC and Cityscapes to demonstrate the superiority of PRCL. The code is available at https://github.com/Haoyu-Xie/PRCL.
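The abstract does not give the exact similarity used between Gaussian representations; a standard choice for probabilistic embeddings, shown here purely as an illustration, is the mutual likelihood score, which automatically shrinks the contribution of high-variance (low-confidence) representations:

```python
import torch

def mutual_likelihood_score(mu1, var1, mu2, var2):
    """
    Similarity between two Gaussian representations N(mu, diag(var)).
    Large variance reduces the score, which is one way to tolerate
    noisy pseudo-labels, in the spirit of what PRCL advocates.
    """
    var = var1 + var2
    return -0.5 * (((mu1 - mu2) ** 2 / var) + torch.log(var)).sum(-1)

# Two pixels with identical means but different confidence.
mu = torch.zeros(2, 8)
confident = torch.full((2, 8), 0.1)
uncertain = torch.full((2, 8), 10.0)
anchor_mu, anchor_var = torch.zeros(2, 8), torch.full((2, 8), 0.1)
print(mutual_likelihood_score(anchor_mu, anchor_var, mu, confident))
print(mutual_likelihood_score(anchor_mu, anchor_var, mu, uncertain))  # lower score
```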
Accurate abnormality localization in chest X-rays (CXR) can benefit the clinical diagnosis of various thoracic diseases. However, lesion-level annotation can only be performed by experienced radiologists and is tedious and time-consuming, and therefore difficult to acquire. This situation makes it hard to develop a fully supervised abnormality localization system for CXR. In this regard, we propose to train a CXR abnormality localization framework via a weakly semi-supervised strategy, termed Point Beyond Class (PBC), which utilizes a small number of fully annotated CXRs with lesion-level bounding boxes and a large number of weakly annotated samples with points. Such a point annotation setting provides weak instance-level information for abnormality localization at a marginal annotation cost. In particular, the core idea behind our PBC is to learn a robust and accurate mapping from point annotations to bounding boxes against the variance of the annotated points. To achieve this, a regularization term, namely multi-point consistency, is proposed; it drives the model to generate consistent bounding boxes from different point annotations inside the same abnormality. Furthermore, a self-supervised task, termed symmetric consistency, is also proposed to deeply exploit useful information from the weakly annotated data for abnormality localization. Experimental results on the RSNA and VinDr-CXR datasets demonstrate the effectiveness of the proposed method. When trained with less than 20% box-level labels, our PBC achieves an improvement of ~5 in mAP compared with the current state-of-the-art method (i.e., Point DETR). The code is available at https://github.com/haozheliu-st/point-beyond-class.
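The multi-point consistency term can be pictured as penalizing disagreement among boxes predicted from different points inside the same abnormality. The sketch below (PyTorch) uses an L1 penalty toward the mean box, which is an illustrative stand-in for the paper's exact regularizer:

```python
import torch
import torch.nn.functional as F

def multi_point_consistency(boxes):
    """
    Boxes predicted from different point annotations of the same
    abnormality should agree; penalize deviation from their mean.
    boxes: (n_points, 4) predicted (x1, y1, x2, y2) for one lesion
    """
    mean_box = boxes.mean(dim=0, keepdim=True)
    return F.l1_loss(boxes, mean_box.expand_as(boxes))

# Three points on the same lesion yield three (hypothetical) predictions.
preds = torch.tensor([[10., 10., 50., 50.],
                      [12.,  9., 51., 49.],
                      [11., 11., 49., 52.]])
print(multi_point_consistency(preds))  # small loss -> consistent boxes
```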
Image-based virtual try-on is one of the most promising applications of human-centric image generation due to its tremendous real-world potential. In this work, we take a step forward to explore versatile virtual try-on solutions, which we argue should possess three main properties: they should support unsupervised training, arbitrary garment categories, and controllable garment editing. To this end, we propose a characteristic-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN++ (PASTA-GAN++), to achieve a versatile system for high-resolution unpaired virtual try-on. Specifically, our PASTA-GAN++ consists of an innovative patch-routed disentanglement module that decouples an intact garment into normalized patches, which retain the garment's style information while eliminating its spatial information, thus alleviating the overfitting issue during unsupervised training. Furthermore, PASTA-GAN++ introduces a patch-based garment representation and a patch-guided parsing synthesis block, allowing it to handle arbitrary garment categories and support local garment editing. Finally, to obtain try-on results with realistic texture details, PASTA-GAN++ incorporates a novel spatially-adaptive residual module to inject the coarse warped garment features into the generator. Extensive experiments on our newly collected UnPaired virtual Try-on (UPT) dataset demonstrate the superiority of PASTA-GAN++ over existing SOTAs and its ability to perform controllable garment editing.
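The patch-routed disentanglement idea, keeping each garment patch's texture while discarding where it sat, can be loosely illustrated as cropping patches around keypoints and resampling them to a canonical size (PyTorch; the keypoints, patch sizes, and routing here are hypothetical placeholders, not the actual module):

```python
import torch
import torch.nn.functional as F

def normalize_patches(garment, centers, patch=32, out=16):
    """
    Cut a garment image into patches around (hypothetical) keypoint
    centers and resample each to a canonical size, keeping local
    texture/style while discarding the patch's spatial placement.
    garment: (C, H, W); centers: list of (y, x) pixel coordinates
    """
    half = patch // 2
    padded = F.pad(garment.unsqueeze(0), (half, half, half, half), mode="reflect")[0]
    crops = torch.stack([padded[:, y:y + patch, x:x + patch] for (y, x) in centers])
    return F.interpolate(crops, size=(out, out), mode="bilinear", align_corners=False)

img = torch.randn(3, 128, 96)
pts = [(20, 30), (64, 48), (100, 70)]
print(normalize_patches(img, pts).shape)  # torch.Size([3, 3, 16, 16])
```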
The alignment of adjacent frames is considered an essential operation in video super-resolution (VSR). Advanced VSR models, including the latest VSR Transformers, are generally equipped with well-designed alignment modules. However, progress in self-attention mechanisms may violate this common belief. In this paper, we rethink the role of alignment in VSR Transformers and make several counter-intuitive observations. Our experiments show that: (i) VSR Transformers can directly utilize multi-frame information from unaligned videos, and (ii) existing alignment methods are sometimes harmful to VSR Transformers. These observations indicate that we can further improve the performance of VSR Transformers simply by removing the alignment module and adopting a larger attention window. Nevertheless, such a design would dramatically increase the computational burden and cannot handle large motions. Therefore, we propose a new and efficient alignment method called patch alignment, which aligns image patches instead of pixels. VSR Transformers equipped with patch alignment achieve state-of-the-art performance on multiple benchmarks. Our work provides valuable insights into how multi-frame information is used in VSR and how to select alignment methods for different networks/datasets. Codes and models will be released at https://github.com/xpixelgroup/rethinkvsralignment.
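Patch alignment replaces per-pixel warping with one motion vector per patch. A rough sketch under that reading follows (PyTorch; pooling a dense flow field to one vector per patch is an assumption about the exact formulation):

```python
import torch
import torch.nn.functional as F

def patch_align(frame, flow, patch=8):
    """
    Instead of warping every pixel with its own flow vector, average the
    flow inside each patch and translate the whole patch by that single
    vector, preserving sub-patch structure.
    frame: (B, C, H, W); flow: (B, 2, H, W) as (dx, dy) in pixels,
    with H and W divisible by `patch`.
    """
    b, _, h, w = flow.shape
    patch_flow = F.avg_pool2d(flow, patch)                      # one (dx, dy) per patch
    patch_flow = F.interpolate(patch_flow, size=(h, w), mode="nearest")
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), 0).float().unsqueeze(0).to(frame)  # (1, 2, H, W)
    coords = grid + patch_flow
    coords[:, 0] = 2 * coords[:, 0] / (w - 1) - 1               # normalize to [-1, 1]
    coords[:, 1] = 2 * coords[:, 1] / (h - 1) - 1
    return F.grid_sample(frame, coords.permute(0, 2, 3, 1), align_corners=True)

frame = torch.randn(1, 3, 32, 32)
flow = torch.ones(1, 2, 32, 32) * 2.0     # uniform 2-pixel shift
print(patch_align(frame, flow).shape)     # torch.Size([1, 3, 32, 32])
```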
Neural architecture search methods seek optimal candidates via efficient weight-sharing supernet training. However, recent studies have shown poor ranking consistency between the performance of stand-alone architectures and that of the corresponding weight-sharing networks. In this paper, we propose Prior-Guided One-shot NAS (PGONAS) to strengthen the ranking correlation of supernets. Specifically, we first explore the effect of activation functions and propose a balanced sampling strategy based on the sandwich rule to alleviate weight coupling in the supernet. Then, FLOPs and Zen-Score are adopted to guide the training of the supernet with a ranking correlation loss. Our PGONAS ranked third in the supernet track of the CVPR 2022 Second Lightweight NAS Challenge. The code is available at https://github.com/pprp/cvpr2022-nas-competition-track1-3th-solution.
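The sandwich rule referenced here trains, at every step, the largest subnet, the smallest subnet, and a few random intermediate subnets. A minimal sketch (plain Python; the search space below is a made-up example):

```python
import random

def sandwich_sample(space, n_random=2, rng=random):
    """
    Sandwich-rule sampling for supernet training: every step trains the
    largest subnet, the smallest subnet, and a few random middle subnets.
    `space` maps each layer/stage to its candidate widths.
    """
    largest = {k: max(v) for k, v in space.items()}
    smallest = {k: min(v) for k, v in space.items()}
    middles = [{k: rng.choice(v) for k, v in space.items()} for _ in range(n_random)]
    return [largest, smallest, *middles]

search_space = {"stage1": [16, 24, 32], "stage2": [32, 48, 64], "stage3": [64, 96, 128]}
for subnet in sandwich_sample(search_space):
    print(subnet)  # train each sampled subnet on the same batch
```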